In the following exercises we will collect and explore some data from YouTube.

1

Before you can start collecting data through the YouTube API, you first need to set up your API authorization.
To set up your YouTube API authorization, use the function yt_oauth from the tuber package, which requires the ID of your app as well as your app secret as arguments.
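A minimal sketch of the authorization step; the app ID and secret below are placeholders you must replace with the credentials from your own Google Cloud project:

```r
# Load tuber and authorize access to the YouTube Data API.
# "YOUR_APP_ID" and "YOUR_APP_SECRET" are placeholders for the
# OAuth credentials of your own app.
library(tuber)

yt_oauth(app_id = "YOUR_APP_ID",
         app_secret = "YOUR_APP_SECRET")
```

Running this opens a browser window where you grant your app access to your Google account; tuber then caches the token locally (in a .httr-oauth file) so you do not have to repeat this step in every session.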

While going through the following exercises you might want to monitor your API quota usage via the Google Cloud Platform dashboard for your app (APIs & Services -> Dashboard -> Select YouTube Data API v3 -> Quotas) to see the query costs for the tuber function calls.

2

How many views, subscribers, and videos does the channel of the Pew Research Center currently have?
To get channel statistics you can use the get_channel_stats function, which requires the ID of the channel (as a string) as its main argument.
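A sketch of the call, assuming you have already authorized via yt_oauth; "CHANNEL_ID" is a placeholder for the Pew Research Center channel ID, which you can look up on the channel's page:

```r
# Retrieve channel statistics (views, subscribers, video count, etc.).
# Replace "CHANNEL_ID" with the actual ID of the Pew Research Center channel.
library(tuber)

pew_stats <- get_channel_stats(channel_id = "CHANNEL_ID")
pew_stats
```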

3

How many views, likes, and comments does the music video “Data Science” by Baba Brinkman have?
To answer this question you can use the get_stats function, which requires the ID of the video.
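A sketch of the call; "VIDEO_ID" is a placeholder for the ID of the "Data Science" video, which you can read from the v= parameter in the video's URL:

```r
# Retrieve video statistics (views, likes, comments, etc.).
# Replace "VIDEO_ID" with the ID taken from the video's URL.
library(tuber)

video_stats <- get_stats(video_id = "VIDEO_ID")
video_stats
```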

4

Collect all comments (including replies) for the video on “The Census” by Last Week Tonight with John Oliver. As we want to use the comments for later exercises, please assign the results to an object called comments_lwt_census.
To get all comments including replies you need to use the function get_all_comments.
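A sketch of the collection step; "VIDEO_ID" is a placeholder for the ID of the Last Week Tonight video on "The Census":

```r
# Collect all comments (including up to 5 replies per comment) for the video
# and store them for later exercises. Replace "VIDEO_ID" with the actual ID.
library(tuber)

comments_lwt_census <- get_all_comments(video_id = "VIDEO_ID")

# Check how many comments were collected
nrow(comments_lwt_census)
```

Note that this call can take a while for videos with many comments, and it consumes a correspondingly larger share of your API quota.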

NB: If you check the comment count on the video's YouTube page, you will see that there are more comments there than in the data frame you just created. This is because get_all_comments only collects up to 5 replies per comment.

5

As a final step we want to save the comments we just collected so we can use them again in the exercises for the following sessions. Please save the comments as an .rds file.
To save an .rds file you can use the base R function saveRDS. Ideally, you should save the file in the folder containing the workshop materials. The code in the solution uses a relative path to save the file in your current working directory.
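A sketch of the save step, assuming the comments object from the previous exercise exists; the file name and relative path are only suggestions:

```r
# Save the collected comments to an .rds file in the current working directory
saveRDS(comments_lwt_census, file = "comments_lwt_census.rds")

# In a later session, the object can be restored with:
# comments_lwt_census <- readRDS("comments_lwt_census.rds")
```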